Towards Robust Off-Policy Evaluation via Human Inputs
Off-policy Evaluation (OPE) methods are crucial tools for evaluating policies
in high-stakes domains such as healthcare, where direct deployment is often
infeasible, unethical, or expensive. When deployment environments are expected
to undergo changes (that is, dataset shifts), it is important for OPE methods
to perform robust evaluation of the policies amidst such changes. Existing
approaches consider robustness against a large class of shifts that can
arbitrarily change any observable property of the environment. This often
results in highly pessimistic estimates of the utilities, thereby invalidating
policies that might have been useful in deployment. In this work, we address
the aforementioned problem by investigating how domain knowledge can help
provide more realistic estimates of the utilities of policies. We leverage
human inputs on which aspects of the environments may plausibly change, and
adapt the OPE methods to only consider shifts on these aspects. Specifically,
we propose a novel framework, Robust OPE (ROPE), which considers shifts on a
subset of covariates in the data based on user inputs, and estimates worst-case
utility under these shifts. We then develop computationally efficient
algorithms for OPE that are robust to the aforementioned shifts for contextual
bandits and Markov decision processes. We also theoretically analyze the sample
complexity of these algorithms. Extensive experimentation with synthetic and
real-world datasets from the healthcare domain demonstrates that our approach
not only captures realistic dataset shifts accurately, but also results in less
pessimistic policy evaluations.
Comment: 10 pages, 5 figures, 1 table. Appeared at AIES '22: Proceedings of
the 2022 AAAI/ACM Conference on AI, Ethics, and Society. Expanded version of
arXiv:2103.1593
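The core idea of the abstract above — estimating a worst-case policy utility when only a user-chosen subset of covariates may shift — can be illustrated with a minimal sketch. This is not the paper's ROPE algorithm: the function name `worst_case_value`, the bounded-density-ratio shift set (each group's probability may change by at most a factor `gamma`), and the per-sample utility inputs are all illustrative assumptions. The worst case over such a set is a small linear program solvable greedily by pouring probability mass onto the lowest-utility strata first.

```python
from collections import defaultdict

def worst_case_value(contexts, utilities, shift_covariate, gamma=2.0):
    """Worst-case average utility when the marginal distribution of one
    covariate may shift by a bounded factor (illustrative sketch only).

    contexts: list of dicts mapping covariate name -> value
    utilities: list of floats, estimated per-sample utility of the policy
    shift_covariate: the covariate the user believes may shift
    gamma: each group's probability may change by at most a factor gamma
    """
    # Group samples by the value of the shifted covariate.
    groups = defaultdict(list)
    for x, u in zip(contexts, utilities):
        groups[x[shift_covariate]].append(u)

    n = len(utilities)
    # (mean utility, lower bound p/gamma, upper bound p*gamma) per group.
    lo = []
    for us in groups.values():
        mu, p = sum(us) / len(us), len(us) / n
        lo.append((mu, p / gamma, p * gamma))

    # Start every group at its minimum allowed probability, then pour the
    # remaining mass into the lowest-utility groups first (greedy LP solve).
    q = {i: l for i, (_, l, _) in enumerate(lo)}
    mass = 1.0 - sum(q.values())
    for i, (mu, l, u) in sorted(enumerate(lo), key=lambda t: t[1][0]):
        add = min(mass, u - l)
        q[i] += add
        mass -= add
        if mass <= 1e-12:
            break

    # Worst-case value is the reweighted average of group mean utilities.
    return sum(q[i] * lo[i][0] for i in range(len(lo)))
```

With `gamma=1.0` no shift is allowed and the estimate reduces to the empirical mean; larger `gamma` widens the shift set and lowers the worst-case estimate, mirroring the pessimism/realism trade-off the abstract describes.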
Generalizability challenges of mortality risk prediction models: A retrospective analysis on a multi-center database
Modern predictive models require large amounts of data for training and evaluation, absence of which may result in models that are specific to certain locations, populations in them and clinical practices. Yet, best practices for clinical risk prediction models have not yet considered such challenges to generalizability. Here we ask whether population- and group-level performance of mortality prediction models vary significantly when applied to hospitals or geographies different from the ones in which they are developed. Further, what characteristics of the datasets explain the performance variation? In this multi-center cross-sectional study, we analyzed electronic health records from 179 hospitals across the US with 70,126 hospitalizations from 2014 to 2015. Generalization gap, defined as difference between model performance metrics across hospitals, is computed for area under the receiver operating characteristic curve (AUC) and calibration slope. To assess model performance by the race variable, we report differences in false negative rates across groups. Data were also analyzed using a causal discovery algorithm “Fast Causal Inference” that infers paths of causal influence while identifying potential influences associated with unmeasured variables. When transferring models across hospitals, AUC at the test hospital ranged from 0.777 to 0.832 (1st-3rd quartile or IQR; median 0.801); calibration slope from 0.725 to 0.983 (IQR; median 0.853); and disparity in false negative rates from 0.046 to 0.168 (IQR; median 0.092). Distribution of all variable types (demography, vitals, and labs) differed significantly across hospitals and regions. The race variable also mediated differences in the relationship between clinical variables and mortality, by hospital/region. In conclusion, group-level performance should be assessed during generalizability checks to identify potential harms to the groups. 
Moreover, for developing methods to improve model performance in new environments, a better understanding and documentation of provenance of data and health processes are needed to identify and mitigate sources of variation.
Author summary: With the growing use of predictive models in clinical care, it is imperative to assess failure modes of predictive models across regions and different populations. In this retrospective cross-sectional study based on a multi-center critical care database, we find that mortality risk prediction models developed in one hospital or geographic region exhibited a lack of generalizability to different hospitals or regions. Moreover, the distribution of clinical (vitals, labs and surgery) variables varied significantly across hospitals and regions. Based on a causal discovery analysis, we postulate that the lack of generalizability results from dataset shifts in race and clinical variables across hospitals or regions. Further, we find that the race variable commonly mediated changes in clinical variable shifts. Findings demonstrate evidence that predictive models can exhibit disparities in performance across racial groups even while performing well in terms of average population-wide metrics. Therefore, assessment of sub-group-level performance should be recommended as part of model evaluation guidelines. Beyond algorithmic fairness metrics, an understanding of data generating processes for sub-groups is needed to identify and mitigate sources of variation, and to decide whether to use a risk prediction model in new environments.
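The abstract's central quantity — the generalization gap, defined as the difference in a performance metric (here AUC) between the development hospital and a test hospital — can be sketched as follows. This is an illustrative reconstruction, not the study's code: the function names `auc`, `generalization_gap`, and the `(features, label)` data layout are assumptions, and AUC is computed with the standard Mann-Whitney rank formula.

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula:
    the fraction of positive/negative pairs the score orders correctly,
    counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def generalization_gap(model, dev, test):
    """AUC difference between the development hospital and a test hospital.

    model: callable mapping a feature vector to a risk score
    dev, test: lists of (features, label) pairs from the two hospitals
    """
    def hospital_auc(data):
        labels = [y for _, y in data]
        scores = [model(x) for x, _ in data]
        return auc(labels, scores)

    return hospital_auc(dev) - hospital_auc(test)
```

The same pattern applies to the study's other metrics: swap AUC for calibration slope, or for group-wise false negative rates to measure the disparity the authors report across racial groups.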